To ensure the safety of railroad operations, it is important to monitor and forecast track geometry irregularities. Higher safety requirements demand forecasting at a higher spatiotemporal frequency, which in turn requires capturing spatial correlations. Additionally, track geometry irregularities are influenced by multiple exogenous factors. In this study, we propose a method to forecast one type of track geometry irregularity, vertical alignment, by incorporating spatial calculations and exogenous factors. The proposed method embeds exogenous factors and captures spatiotemporal correlations with a convolutional long short-term memory (ConvLSTM) network. In our experiments, we compared the proposed method with other methods in terms of forecasting performance, and we conducted an ablation study on the exogenous factors to examine their contribution. The results reveal that the spatial calculations and maintenance record data improve the forecasting of vertical alignment.
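As a rough illustration of the architecture described above, the following is a minimal sketch of a 1-D ConvLSTM forecaster in which exogenous factors are embedded and concatenated as extra input channels; the layer sizes, names such as `exo_dim`, and the overall layout are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a 1-D ConvLSTM forecaster with exogenous-factor channels.
import torch
import torch.nn as nn

class ConvLSTM1dCell(nn.Module):
    """Single ConvLSTM cell operating along the track (spatial) axis."""
    def __init__(self, in_ch, hidden_ch, kernel=5):
        super().__init__()
        self.hidden_ch = hidden_ch
        # One convolution produces the input/forget/output/cell gates at once.
        self.gates = nn.Conv1d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class IrregularityForecaster(nn.Module):
    """Roll a ConvLSTM over past time steps; exogenous factors are embedded
    and concatenated as extra channels before each recurrent update."""
    def __init__(self, exo_dim=8, hidden_ch=32):
        super().__init__()
        self.exo_embed = nn.Linear(exo_dim, 4)               # embed exogenous factors
        self.cell = ConvLSTM1dCell(1 + 4, hidden_ch)
        self.head = nn.Conv1d(hidden_ch, 1, kernel_size=1)   # next-step vertical alignment

    def forward(self, series, exo):
        # series: (batch, time, track_sections), exo: (batch, time, exo_dim)
        b, t, n = series.shape
        h = torch.zeros(b, self.cell.hidden_ch, n)
        c = torch.zeros_like(h)
        for step in range(t):
            e = self.exo_embed(exo[:, step])                 # (b, 4)
            e = e.unsqueeze(-1).expand(-1, -1, n)            # broadcast along the track
            x = torch.cat([series[:, step].unsqueeze(1), e], dim=1)
            _, (h, c) = self.cell(x, (h, c))
        return self.head(h).squeeze(1)                       # (b, track_sections)
```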
Sufficiently perceiving the environment is a critical factor in robot motion generation. Although the introduction of deep visual processing models has helped extend this capability, existing methods lack the ability to actively modify what is perceived; humans perform such internal processes during visual cognition. This paper addresses the issue by proposing a novel robot motion generation model inspired by human cognitive structure. The model incorporates a state-driven, active, top-down visual attention module that acquires attention targets that can change actively according to the task state. We refer to this attention as role-based attention, because the acquired attention focuses on targets that share a coherent role throughout the motion. The model was trained on a robot tool-use task in which role-based attention regarded the robot's gripper and the tool as the same end effector during object-picking and object-dragging motions, respectively. This resembles a biological phenomenon called tool-body assimilation, in which a handled tool is treated as an extension of the body. The results show improved flexibility in the model's visual perception: stable attention and motion persisted even when the model was given an untrained tool or exposed to distractions from the experimenter.
We propose a novel approach for collision-free path planning that uses conditional generative adversarial networks (cGANs) to map between a robot's joint space and a latent space that captures only the collision-free regions of the joint space, conditioned on an obstacle map. When manipulating a robot arm, it is convenient to generate multiple plausible trajectories for further selection. In addition, for safety reasons, trajectories must avoid collisions with the robot itself and the surrounding environment. In the proposed approach, various trajectories can be produced by connecting the start and goal states with arbitrary line segments in this generated latent space. Our method provides this collision-free latent space, after which any planner, using any optimization criteria, can generate the most suitable path. We successfully validated the approach in simulation and with a real UR5e 6-DoF robot arm, and confirmed that different trajectories can be generated depending on the choice of optimization criteria.
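The key idea, drawing a straight line between start and goal in the learned collision-free latent space, could look like the following sketch; `encoder` and `generator` stand for the trained cGAN mappings and are assumptions about the interface, not the paper's code.

```python
# Illustrative sketch (not the paper's code): once a cGAN has learned a mapping
# between joint space and a collision-free latent space, a trajectory can be
# read off a straight line segment drawn in that latent space.
import numpy as np

def latent_line_path(encoder, generator, obstacle_map, q_start, q_goal, n_points=50):
    """Encode start/goal joints into the latent space, connect them with a
    straight segment, and decode every sample back to a joint configuration.
    `encoder`/`generator` are assumed to be callables from a trained cGAN."""
    z_start = encoder(q_start, obstacle_map)
    z_goal = encoder(q_goal, obstacle_map)
    alphas = np.linspace(0.0, 1.0, n_points)[:, None]
    z_path = (1.0 - alphas) * z_start + alphas * z_goal   # arbitrary line segment
    # Because the latent space only covers collision-free regions, every decoded
    # waypoint is expected to be collision-free.
    return np.stack([generator(z, obstacle_map) for z in z_path])
```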
We achieved contact-rich flexible-object manipulation, which is difficult to control with vision alone. In the unzipping task we chose as a validation task, the gripper grasps the puller, which hides the bag state, such as the direction and amount of deformation behind it, making it difficult to obtain the information needed to perform the task from vision alone. Furthermore, the state of the flexible fabric bag changes continuously during manipulation, so the robot needs to respond dynamically to those changes. However, appropriate robot behaviors for all bag states are difficult to prepare in advance. To solve this problem, we developed a model that performs contact-rich flexible-object manipulation through real-time prediction of vision combined with tactile sensing. We introduce a point-based attention mechanism for extracting image features, a softmax transformation for extracting predicted motions, and a convolutional neural network for extracting tactile features. Experimental results with a real robot arm show that our method can realize motions that respond to bag deformation while reducing the load on the zipper. Furthermore, using tactile sensing improved the success rate from 56.7% to 93.3% compared with vision alone, demonstrating the effectiveness and high performance of our method.
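A speculative sketch of the kind of model described above is given below: spatial softmax is used here as one plausible reading of the "point-based attention" over image features, a small CNN encodes tactile input, and a recurrent layer predicts the next motion; all layer choices and dimensions are assumptions, not the paper's architecture.

```python
# Speculative multimodal policy sketch: attended image points + tactile features
# + joint state are fused and fed to a recurrent cell that predicts the next motion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSoftmaxPoints(nn.Module):
    """Collapse each feature-map channel to an expected (x, y) attention point."""
    def forward(self, fmap):                       # (b, c, h, w)
        b, c, h, w = fmap.shape
        attn = F.softmax(fmap.flatten(2), dim=-1).view(b, c, h, w)
        ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
        xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
        return torch.cat([(attn * xs).sum((2, 3)), (attn * ys).sum((2, 3))], dim=1)

class VisuoTactilePolicy(nn.Module):
    def __init__(self, tactile_ch=1, joint_dim=7, hidden=128):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 5, 2), nn.ReLU(),
                                    nn.Conv2d(16, 16, 5, 2), nn.ReLU())
        self.points = SpatialSoftmaxPoints()
        self.tactile = nn.Sequential(nn.Conv2d(tactile_ch, 8, 3), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTMCell(32 + 8 + joint_dim, hidden)
        self.head = nn.Linear(hidden, joint_dim)   # next joint command

    def forward(self, image, tactile, joints, state=None):
        feats = torch.cat([self.points(self.vision(image)),
                           self.tactile(tactile), joints], dim=-1)
        h, c = self.rnn(feats, state)
        return self.head(h), (h, c)
```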
External visual inspection of rolling stock's underfloor equipment is currently performed by human inspectors. In this study, we attempt to partly automate this inspection by investigating anomaly detection algorithms based on image processing. Because railroad maintenance studies tend to have little anomaly data, unsupervised learning methods are usually preferred for anomaly detection; however, training cost and accuracy remain challenges. Prior work has created anomalous images from normal images by adding noise, but the anomaly targeted in this study, the rotation of piping cocks, is difficult to create with noise. Therefore, we propose a new method that applies style conversion via generative adversarial networks to three-dimensional computer graphics, imitating anomaly images so that anomaly detection can be trained in a supervised manner. A geometry-consistent style conversion model was used to convert the images, so the color and texture successfully imitate real images while the anomalous shape is preserved. Using the generated anomaly images as supervised data, the anomaly detection model can be trained without complex adjustments and successfully detects anomalies.
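For the downstream step, once the style-converted CG anomaly images are available, anomaly detection becomes an ordinary supervised classification problem; a minimal training sketch, with ResNet-18 chosen arbitrarily as the classifier, might look like this.

```python
# Illustrative training sketch for the downstream step: once style-converted CG
# anomaly images exist, anomaly detection reduces to ordinary supervised
# classification of normal vs. anomalous (rotated piping cock) images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(num_classes=2)     # normal vs. anomalous
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """`images` mixes real normal photos with GAN-stylized CG anomaly images."""
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    return loss.item()
```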
A practical issue with edge AI systems is that the data distributions of the training dataset and the deployed environment may differ because of noise and environmental changes over time. This phenomenon, known as concept drift, degrades the performance of edge AI systems and may cause system failures. A practical way to address this gap is to retrain the neural network model when concept drift is detected. However, because compute resources on edge devices are strictly limited, in this paper we propose a lightweight concept drift detection method that cooperates with a recently proposed on-device learning technique for neural networks. In this setting, both the neural network retraining and the proposed concept drift detection rely only on sequential computation, which reduces computation cost and memory utilization. Evaluation results show that, although accuracy decreases by 3.8%-4.3% compared with existing batch-based detection methods, the proposed approach reduces memory size by 88.9%-96.4% and execution time by 1.3%-83.8%. The combination of neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico, which has 264 kB of memory.
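The abstract does not specify the detection statistic, so the following is only a hedged sketch of what a purely sequential drift detector could look like: a running mean and variance of the model's per-sample error, updated in O(1) time and memory, with drift flagged on a threshold.

```python
# Hedged sketch of a purely sequential drift detector; the statistic used in the
# paper is not spelled out in the abstract, so this keeps only a running
# mean/variance of the model's per-sample error and flags drift on a threshold.
class SequentialDriftDetector:
    def __init__(self, alpha=0.01, k=3.0):
        self.alpha, self.k = alpha, k      # smoothing factor, threshold width
        self.mean, self.var = 0.0, 1.0

    def update(self, error):
        """Update running statistics with one sample and report drift."""
        drift = abs(error - self.mean) > self.k * self.var ** 0.5
        self.mean += self.alpha * (error - self.mean)
        self.var += self.alpha * ((error - self.mean) ** 2 - self.var)
        return drift  # when True, trigger on-device retraining
```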
Owing to the widespread adoption of the Internet of Things, a vast amount of sensor information is being acquired in real time. Accordingly, the communication cost of data from edge devices is increasing. Compressed sensing (CS), a data compression method that can be used on edge devices, has been attracting attention as a way to reduce communication costs. In CS, estimating an appropriate compression ratio is important. Reinforcement learning can be used to adaptively estimate the compression ratio for the acquired data; however, the computational costs of existing reinforcement learning methods that can run on edge devices are high. In this study, we developed an efficient reinforcement learning method for edge devices, referred to as the actor-critic online sequential extreme learning machine (AC-OSELM), and a system that compresses data by estimating an appropriate compression ratio on the edge using AC-OSELM. The performance of the proposed method in estimating the compression ratio is evaluated by comparing it with other reinforcement learning methods for edge devices. The experimental results show that AC-OSELM achieved the same or better compression performance and faster compression ratio estimation than the existing methods.
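The edge-side compression step itself is straightforward; assuming random Gaussian measurements, a sketch of compressing a sensor window at the ratio chosen by the agent could be:

```python
# Minimal sketch of the edge-side compression step: the RL agent (AC-OSELM in the
# paper) chooses a compression ratio, and the sensor window is projected with a
# random measurement matrix of the corresponding size. The Gaussian sensing
# matrix is an assumption for illustration.
import numpy as np

def compress(window, ratio, rng=np.random.default_rng(0)):
    """Compress a length-n sensor window to m = ceil(ratio * n) measurements."""
    n = window.shape[0]
    m = max(1, int(np.ceil(ratio * n)))
    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    return phi @ window                              # transmitted instead of the raw data
```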
IR models using a pretrained language model significantly outperform lexical approaches like BM25. In particular, SPLADE, which encodes texts to sparse vectors, is an effective model for practical use because it shows robustness to out-of-domain datasets. However, SPLADE still struggles with exact matching of low-frequency words in training data. In addition, domain shifts in vocabulary and word frequencies deteriorate the IR performance of SPLADE. Because supervision data are scarce in the target domain, the domain shifts must be addressed without supervision data. This paper proposes an unsupervised domain adaptation method that fills vocabulary and word-frequency gaps. First, we expand the vocabulary and execute continual pretraining with a masked language model on a corpus of the target domain. Then, we multiply SPLADE-encoded sparse vectors by inverse document frequency weights to consider the importance of documents with low-frequency words. We conducted experiments using our method on datasets with a large vocabulary gap from the source domain. We show that our method outperforms the present state-of-the-art domain adaptation method. In addition, our method achieves state-of-the-art results when combined with BM25.
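The word-frequency part of the method, multiplying SPLADE's sparse term weights by target-domain IDF before scoring, can be sketched as follows; the smoothing formula and where the weights are applied are assumptions for illustration.

```python
# Sketch of the word-frequency adaptation step: SPLADE's sparse term weights are
# multiplied by IDF computed on the target-domain corpus, so rare domain terms
# contribute more to the dot-product score. Variable names are illustrative.
import numpy as np

def idf_weights(doc_freq, n_docs):
    """Smoothed inverse document frequency per vocabulary entry."""
    return np.log((n_docs + 1) / (doc_freq + 1)) + 1.0

def rescore(query_vec, doc_vec, idf):
    """Dot product of SPLADE sparse vectors after IDF reweighting."""
    return float(np.dot(query_vec * idf, doc_vec * idf))
```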
Embodied Instruction Following (EIF) studies how mobile manipulator robots should be controlled to accomplish long-horizon tasks specified by natural language instructions. While most research on EIF is conducted in simulators, the ultimate goal of the field is to deploy the agents in real life. As such, it is important to minimize the data cost required for training an agent, to help the transition from sim to real. However, many studies focus only on performance and overlook the data cost: modules that require separate training on extra data are often introduced without consideration of deployability. In this work, we propose FILM++, which extends the existing work FILM with modifications that do not require extra data. While all data-driven modules are kept constant, FILM++ more than doubles FILM's performance. Furthermore, we propose Prompter, which replaces FILM++'s semantic search module with language model prompting. Unlike FILM++'s implementation, which requires training on extra sets of data, our prompting-based implementation requires no training while achieving better or at least comparable performance. Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with high-level instructions only and with step-by-step instructions, respectively, outperforming the previous state of the art by 6.57% and 10.31%.
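To illustrate how a prompting-based replacement for semantic search might work, here is a hedged sketch that scores candidate receptacles with a masked language model; the prompt template and the choice of `bert-base-uncased` are illustrative, not the paper's actual setup.

```python
# Hedged sketch of replacing a learned semantic-search module with language-model
# prompting: a masked LM scores candidate receptacles for the target object.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def rank_receptacles(target_object, candidates):
    """Return candidate locations ordered by the language model's belief."""
    prompt = f"You are likely to find a {target_object} in the [MASK]."
    scores = {r["token_str"].strip(): r["score"]
              for r in fill(prompt, targets=candidates)}
    return sorted(candidates, key=lambda c: scores.get(c, 0.0), reverse=True)
```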
This paper addresses recipe generation from unsegmented cooking videos, a task that requires an agent to (1) extract key events for completing the dish and (2) generate sentences for the extracted events. Our task is similar to dense video captioning (DVC), which aims to detect events exhaustively and generate sentences for them. However, unlike DVC, recipe story awareness is crucial in recipe generation: a model should output an appropriate number of key events in the correct order. We analyzed the outputs of DVC models and observed that, although (1) several events can be adopted as a recipe story, (2) the generated sentences for such events are not grounded in the visual content. Based on this, we hypothesize that a correct recipe can be obtained by selecting oracle events from the output events of a DVC model and re-generating sentences for them. To achieve this, we propose a novel transformer-based joint approach that trains an event selector and a sentence generator: the selector chooses oracle events from the DVC model's outputs, and the generator produces grounded sentences for those events. In addition, we extend the model to generate more accurate recipes by including ingredients. Experimental results show that the proposed method outperforms state-of-the-art DVC models. We also confirm that, by modeling recipes in a story-aware manner, the proposed model outputs an appropriate number of events in the correct order.